Understanding CNN Hidden Neuron Activations Using Structured Background Knowledge and Deductive Reasoning

arXiv.org Artificial Intelligence

A major challenge in Explainable AI is correctly interpreting the activations of hidden neurons: accurate interpretations would provide insight into what a deep learning system has internally detected as relevant in the input, demystifying the otherwise black-box character of deep learning systems. The state of the art indicates that hidden node activations can, in some cases, be interpreted in a way that makes sense to humans, but systematic automated methods that can hypothesize and verify interpretations of hidden neuron activations remain underexplored. In this paper, we provide such a method and demonstrate that it yields meaningful interpretations. Our approach uses large-scale background knowledge, approximately 2 million classes curated from the Wikipedia concept hierarchy, together with a symbolic reasoning approach called Concept Induction, based on description logics and originally developed for applications in the Semantic Web field. Our results show that we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network through a hypothesis and verification process.
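
To make the hypothesis step concrete, here is a minimal, self-contained sketch, not the authors' actual pipeline: probe the dense layer of a toy Keras CNN and collect, for each neuron, the background-knowledge classes of the inputs that activate it most strongly. The model architecture, the stand-in data, and the label map are all illustrative assumptions.

```python
# Sketch of the hypothesis step: find which inputs most strongly
# activate a given dense-layer neuron, then gather their classes.
# Toy model and random data stand in for the paper's actual setup.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu", name="dense"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# Auxiliary model that exposes the dense layer's activations.
probe = tf.keras.Model(model.input, model.get_layer("dense").output)

images = np.random.rand(100, 32, 32, 3).astype("float32")  # stand-in data
image_labels = [f"class_{i % 5}" for i in range(100)]      # stand-in BK classes

activations = probe.predict(images, verbose=0)  # shape: (100 images, 16 neurons)

def top_images(neuron: int, k: int = 10) -> np.ndarray:
    """Indices of the k inputs that most strongly activate one neuron."""
    return np.argsort(activations[:, neuron])[-k:]

# Background-knowledge classes of the top inputs become the positive
# examples handed to the concept induction step.
positives = {image_labels[i] for i in top_images(neuron=3)}
print(positives)
```

The positive example set gathered this way is what the concept induction step would then generalize into a candidate label for the neuron.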


Explaining Deep Learning Hidden Neuron Activations using Concept Induction

arXiv.org Artificial Intelligence

One of the current key challenges in Explainable AI is correctly interpreting the activations of hidden neurons. It seems evident that accurate interpretations would provide insight into the question of what a deep learning system has internally detected as relevant in the input, thus lifting some of the black-box character of deep learning systems. The state of the art on this front indicates that hidden node activations appear, at least in some cases, to be interpretable in a way that makes sense to humans. Yet systematic automated methods that first hypothesize an interpretation of hidden neuron activations and then verify it are mostly missing. In this paper, we provide such a method and demonstrate that it provides meaningful interpretations. It is based on using large-scale background knowledge, a class hierarchy of approximately 2 million classes curated from the Wikipedia Concept Hierarchy, together with a symbolic reasoning approach called concept induction, based on description logics and originally developed for applications in the Semantic Web field. Our results show that we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network through a hypothesis and verification process.
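
The concept induction step itself runs a description logic reasoner over the class hierarchy; as a rough, self-contained stand-in, the sketch below computes the most specific common ancestor of the positive examples in a toy hierarchy and then verifies the resulting label against negative examples. The hierarchy, the leak threshold, and the function names are illustrative assumptions, not the authors' algorithm.

```python
# Approximation of concept induction as a least-common-subsumer search
# over a child -> parent map, a tiny stand-in for the ~2M-class
# Wikipedia hierarchy, followed by a verification check.
PARENT = {
    "beagle": "dog", "poodle": "dog", "dog": "mammal",
    "tabby": "cat", "cat": "mammal", "mammal": "animal",
    "oak": "tree", "tree": "plant", "plant": "organism",
    "animal": "organism",
}

def ancestors(cls: str) -> list[str]:
    """Path from a class up to the hierarchy root, inclusive."""
    path = [cls]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def induce_label(positives: set[str]) -> str:
    """Most specific class subsuming every positive example."""
    common = set(ancestors(next(iter(positives))))
    for cls in positives:
        common &= set(ancestors(cls))
    # The deepest surviving ancestor is the most specific subsumer.
    return max(common, key=lambda c: len(ancestors(c)))

def verify(label: str, negatives: set[str], max_leak: float = 0.2) -> bool:
    """Accept the label only if few negatives also fall under it."""
    leaked = sum(label in ancestors(n) for n in negatives)
    return leaked / max(len(negatives), 1) <= max_leak

label = induce_label({"beagle", "poodle", "tabby"})
print(label, verify(label, {"oak"}))  # -> mammal True
```

A real description logic reasoner generalizes over full class expressions rather than a single ancestor chain, but the hypothesize-then-verify shape of the computation is the same.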


Call for papers: Special Issue on Machine Learning for Knowledge Base Generation and Population

#artificialintelligence

In the last decade, knowledge bases have attracted tremendous interest in the Semantic Web field from both academia and industry, and many large knowledge bases are now available. However, both the generation of new knowledge bases and the population of existing ones with new facts face several challenges. Knowledge bases have mostly been built manually, a highly specialized and time-consuming activity. Meanwhile, sources of unstructured and semi-structured data are growing at a much faster rate than structured ones, so it is desirable to exploit these large non-structured sources to populate structured knowledge bases. In the Semantic Web, a major cornerstone of knowledge bases are ontologies and schemas, which play a key role in providing common vocabularies and in describing and constructing the Web of Data.
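
As a small illustration of the population task the call describes, the sketch below adds facts, assumed to have been extracted from some semi-structured source, to an RDF knowledge base with rdflib, using the FOAF vocabulary as the shared ontology. The namespace and the input rows are invented for the example.

```python
# Minimal knowledge base population sketch: turn extracted rows into
# RDF triples under a common vocabulary (FOAF) and serialize them.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/kb/")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

# Stand-in for facts mined from unstructured or semi-structured sources.
extracted_rows = [
    ("Ada_Lovelace", "name", "Ada Lovelace"),
    ("Ada_Lovelace", "knows", "Charles_Babbage"),
]

for subject, predicate, obj in extracted_rows:
    s = EX[subject]
    g.add((s, RDF.type, FOAF.Person))
    if predicate == "name":
        g.add((s, FOAF.name, Literal(obj)))
    elif predicate == "knows":
        g.add((s, FOAF.knows, EX[obj]))

print(g.serialize(format="turtle"))
```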